space mission


Towards A Catalogue of Requirement Patterns for Space Robotic Missions

Etumi, Mahdi, Taylor, Hazel M., Farrell, Marie

arXiv.org Artificial Intelligence

In the development of safety and mission-critical systems, including autonomous space robotic missions, complex behaviour is captured during the requirements elicitation phase. Requirements are typically expressed in natural language, which is ambiguous and not amenable to formal verification methods that can provide robust guarantees of system behaviour. To support the definition of formal requirements, specification patterns provide reusable, logic-based templates. A suite of robotic specification patterns, along with their formalisation in NASA's Formal Requirements Elicitation Tool (FRET), already exists. These pre-existing requirement patterns are domain-agnostic and, in this paper, we explore their applicability to space missions. To achieve this, we carried out a literature review of existing space missions and formalised their requirements using FRET, contributing a corpus of space mission requirements. We categorised these requirements using the pre-existing specification patterns, which demonstrated their applicability to space missions. However, not all of the requirements that we formalised corresponded to an existing pattern, so we have contributed 5 new requirement specification patterns as well as several variants of the existing and new patterns. We also conducted an expert evaluation of the new patterns, highlighting their benefits and limitations.
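The reusable, logic-based templates the abstract describes can be illustrated with a minimal sketch. The class, the "bounded-response" pattern, and the rover requirement below are hypothetical examples written in a FRETISH-like "when ..., the ... shall ..." style, not FRET's actual internal representation:

```python
from dataclasses import dataclass

@dataclass
class RequirementPattern:
    """A reusable requirement template in a FRETISH-like style.

    Field names and rendering are illustrative, not FRET's schema.
    """
    name: str
    template: str  # contains {placeholder} fields to instantiate

    def instantiate(self, **bindings: str) -> str:
        # Bind concrete mission vocabulary into the generic pattern.
        return self.template.format(**bindings)

# A hypothetical bounded-response pattern, instantiated for a rover.
response = RequirementPattern(
    name="bounded-response",
    template=("when {trigger}, the {component} shall "
              "within {bound} satisfy {response}"),
)

req = response.instantiate(
    trigger="obstacle_detected",
    component="rover",
    bound="2 seconds",
    response="velocity = 0",
)
```

Instantiating the same pattern with different bindings is what lets one template cover many concrete mission requirements.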


LoongRL: Reinforcement Learning for Advanced Reasoning over Long Contexts

Wang, Siyuan, Zhang, Gaokai, Zhang, Li Lyna, Shang, Ning, Yang, Fan, Chen, Dongyao, Yang, Mao

arXiv.org Artificial Intelligence

Reasoning over long contexts is essential for large language models. While reinforcement learning (RL) enhances short-context reasoning by inducing "Aha" moments in chain-of-thought, the advanced thinking patterns required for long-context reasoning remain largely unexplored, and high-difficulty RL data are scarce. In this paper, we introduce LoongRL, a data-driven RL method for advanced long-context reasoning. Central to LoongRL is KeyChain, a synthesis approach that transforms short multi-hop QA into high-difficulty long-context tasks by inserting UUID chains that hide the true question among large collections of distracting documents. Solving these tasks requires the model to trace the correct chain step-by-step, identify the true question, retrieve relevant facts and reason over them to answer correctly. RL training on KeyChain data induces an emergent plan-retrieve-reason-recheck reasoning pattern that generalizes far beyond the training length. Models trained at 16K effectively solve 128K tasks without prohibitive full-length RL rollout costs. On Qwen2.5-7B and 14B, LoongRL substantially improves long-context multi-hop QA accuracy, with absolute gains of +23.5% and +21.1% respectively. The resulting LoongRL-14B reaches a score of 74.2, rivaling much larger frontier models such as o3-mini (74.5) and DeepSeek-R1 (74.9). It also improves long-context retrieval, passes all 128K needle-in-a-haystack stress tests, and preserves short-context reasoning capabilities.
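Our reading of the KeyChain synthesis step can be sketched as follows. The function name, document phrasing, and parameters are illustrative assumptions based on the abstract, not the authors' actual code:

```python
import random
import uuid

def make_keychain_task(question: str, distractors: list[str],
                       chain_len: int = 4, seed: int = 0) -> list[str]:
    """Sketch of a KeyChain-style task: hide the true question behind
    a chain of UUID hops scattered among distractor documents."""
    rng = random.Random(seed)
    keys = [str(uuid.UUID(int=rng.getrandbits(128)))
            for _ in range(chain_len)]
    # Each pointer document reveals only the next hop in the chain.
    docs = [f"Key {keys[i]} points to key {keys[i + 1]}."
            for i in range(chain_len - 1)]
    docs.append(f"Key {keys[-1]} holds the question: {question}")
    docs += distractors
    rng.shuffle(docs)
    # The prompt gives only the first key; solving requires following
    # the chain hop by hop to recover the true question.
    return [f"Start from key {keys[0]}."] + docs

task = make_keychain_task("Who directed Alien?",
                          [f"Filler document {i}." for i in range(5)])
```

Lengthening the chain and adding distractors is what turns a short multi-hop question into a high-difficulty long-context task.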


Explainable AI-Enhanced Supervisory Control for Robust Multi-Agent Robotic Systems

Pirayeshshirazinezhad, Reza, Fathi, Nima

arXiv.org Artificial Intelligence

We present an explainable AI-enhanced supervisory control framework for multi-agent robotics that combines (i) a timed-automata supervisor for safe, auditable mode switching, (ii) robust continuous control (a Lyapunov-based controller for large-angle maneuvers; a sliding-mode controller (SMC) with boundary layers for precision and disturbance rejection), and (iii) an explainable predictor that maps mission context to gains and expected performance (energy, error). Monte Carlo-driven optimization provides the training data, enabling transparent real-time trade-offs. We validated the approach in two contrasting domains, spacecraft formation flying and autonomous underwater vehicles (AUVs). Despite different environments (gravity/actuator bias vs. hydrodynamic drag/currents), both share uncertain six degrees of freedom (6-DOF) rigid-body dynamics, relative motion, and tight tracking needs, making them representative of general robotic systems. In the space mission, the supervisory logic selects parameters that meet mission criteria. In AUV leader-follower tests, the same SMC structure maintains a fixed offset under stochastic currents with bounded steady error. In spacecraft validation, the SMC controller achieved submillimeter alignment with 21.7% lower tracking error and 81.4% lower energy consumption compared to proportional-derivative (PD) controller baselines, while in AUV tests it maintained bounded errors under stochastic currents. These results highlight both the portability and the interpretability of the approach for safety-critical, resource-constrained multi-agent robotics.
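The boundary-layer SMC mentioned above follows a standard textbook form. A minimal sketch, with illustrative gains rather than the paper's tuned values:

```python
import numpy as np

def smc_control(err: float, derr: float, lam: float = 2.0,
                gain: float = 1.5, phi: float = 0.05) -> float:
    """Sliding-mode control with a boundary layer (generic textbook
    form; gains here are illustrative, not the paper's values).

    s = derr + lam*err defines the sliding surface. Replacing the
    discontinuous sign(s) with a saturated s/phi inside the boundary
    layer suppresses chattering at the cost of a bounded steady error.
    """
    s = derr + lam * err
    return -gain * float(np.clip(s / phi, -1.0, 1.0))

# Far from the surface the command saturates at +/- gain;
# inside the boundary layer it is proportional to s.
u_far = smc_control(err=1.0, derr=0.0)
u_near = smc_control(err=0.01, derr=0.0)
```

The bounded steady error traded for chattering suppression is exactly the "bounded steady error" the abstract reports in the AUV tests.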


Science Autonomy using Machine Learning for Astrobiology

Da Poian, Victoria, Theiling, Bethany, Lyness, Eric, Burtt, David, Azari, Abigail R., Pasterski, Joey, Chou, Luoth, Trainer, Melissa, Danell, Ryan, Kaplan, Desmond, Li, Xiang, Clough, Lily, McKinney, Brett, Mandrake, Lukas, Diamond, Bill, Freissinet, Caroline

arXiv.org Artificial Intelligence

AI and ML enable rapid processing of large datasets, and offer advanced feature extraction and pattern recognition capabilities that deliver meaningful insights, enhancing human analysts' ability to identify correlations within complex, multi-variable datasets. This is especially needed for astrobiology, where models must distinguish complex biotic patterns from intricate abiotic backgrounds. As data volume outpaces the capacity for timely data analysis, AI and ML become essential for data processing. They could also prove invaluable for the complex data analysis that will accompany flight instruments' advancements. ML has been widely applied in image processing of large datasets in astrophysics and Earth observation (e.g., crater identification [2-4], sample targeting [5]). Similar techniques that share methodology but are improved for onboard computational restrictions could be leveraged for astrobiology missions to identify key features [6].


XAMI -- A Benchmark Dataset for Artefact Detection in XMM-Newton Optical Images

Dima, Elisabeta-Iulia, Gómez, Pablo, Kruk, Sandor, Kretschmar, Peter, Rosen, Simon, Popa, Călin-Adrian

arXiv.org Artificial Intelligence

Reflected or scattered light produces artefacts in astronomical observations that can negatively impact scientific studies. Hence, automated detection of these artefacts is highly beneficial, especially with the increasing amounts of data gathered. Machine learning methods are well-suited to this problem, but currently there is a lack of annotated data to train such approaches to detect artefacts in astronomical observations. In this work, we present a dataset of images from the XMM-Newton space telescope Optical Monitoring camera showing different types of artefacts. We hand-annotated a sample of 1000 images with artefacts, which we use to train automated ML methods. We further demonstrate techniques tailored for accurate detection and masking of artefacts using instance segmentation. We adopt a hybrid approach, combining knowledge from both convolutional neural networks (CNNs) and transformer-based models, and use their respective advantages in segmentation. The presented method and dataset will advance artefact detection in astronomical observations by providing a reproducible baseline. All code and data are made available (https://github.com/ESA-Datalabs/XAMI-model and https://github.com/ESA-Datalabs/XAMI-dataset).


The biggest science news stories of 2023 as chosen by New Scientist

New Scientist

From extraordinary heat to stunning space imagery, New Scientist has brought you extensive coverage of the biggest developments, discoveries and events in science, technology, health and environment in 2023. Here is our recap of some of the best stories this year. The number of heat records broken in 2023 is simply astounding. While we can't officially say for sure until final figures are in next month, this year was almost certainly the hottest year ever recorded. In a sign of how extraordinary temperatures have been this year, New Scientist was already reporting this possibility in mid-June, well before the northern hemisphere summer had got into gear.


Optimality Principles in Spacecraft Neural Guidance and Control

Izzo, Dario, Blazquez, Emmanuel, Ferede, Robin, Origer, Sebastien, De Wagter, Christophe, de Croon, Guido C. H. E.

arXiv.org Artificial Intelligence

Spacecraft and drones aimed at exploring our solar system are designed to operate in conditions where the smart use of onboard resources is vital to the success or failure of the mission. Sensorimotor actions are thus often derived from high-level, quantifiable optimality principles assigned to each task, utilizing consolidated tools in optimal control theory. The planned actions are derived on the ground and transferred onboard, where controllers have the task of tracking the uploaded guidance profile. Here we argue that end-to-end neural guidance and control architectures (here called G&CNets) allow the burden of acting upon these optimality principles to be transferred onboard. In this way, the sensor information is transformed in real time into optimal plans, thus increasing mission autonomy and robustness. We discuss the main results obtained in training such neural architectures in simulation for interplanetary transfers, landings and close proximity operations, highlighting the successful learning of optimality principles by the neural model. We then suggest drone racing as an ideal gym environment to test these architectures on real robotic platforms, thus increasing confidence in their utilization on future space exploration missions. Drone racing shares with spacecraft missions both limited onboard computational capabilities and similar control structures induced from the optimality principle sought, but it also entails different levels of uncertainties and unmodelled effects. Furthermore, the success of G&CNets on extremely resource-restricted drones illustrates their potential to bring real-time optimal control within reach of a wider variety of robotic systems, both in space and on Earth.
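A G&CNet, as described, maps the sensed state directly to a control command. A minimal untrained forward-pass sketch; the layer sizes and the 6-state/3-thrust layout are assumptions for illustration, and real G&CNets are trained to imitate optimal-control solutions:

```python
import numpy as np

rng = np.random.default_rng(0)

def gcnet_forward(state: np.ndarray, params: dict) -> np.ndarray:
    """Minimal sketch of an end-to-end guidance-and-control network:
    a small MLP mapping spacecraft state to a control command."""
    h = np.tanh(state @ params["W1"] + params["b1"])
    # tanh output head keeps each thrust command bounded in [-1, 1].
    return np.tanh(h @ params["W2"] + params["b2"])

# 6-dimensional state (position + velocity) -> 3-axis thrust command.
params = {
    "W1": rng.normal(scale=0.1, size=(6, 32)), "b1": np.zeros(32),
    "W2": rng.normal(scale=0.1, size=(32, 3)), "b2": np.zeros(3),
}
u = gcnet_forward(np.array([1.0, 0.0, -0.5, 0.0, 0.1, 0.0]), params)
```

Because the whole mapping is one cheap forward pass, it can run onboard in real time instead of tracking a ground-computed guidance profile.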


Assurance for Autonomy -- JPL's past research, lessons learned, and future directions

Feather, Martin S., Pinto, Alessandro

arXiv.org Artificial Intelligence

Robotic space missions have long depended on automation, defined in the 2015 NASA Technology Roadmaps as "the automatically-controlled operation of an apparatus, process, or system using a pre-planned set of instructions (e.g., a command sequence)," to react to events when a rapid response is required. Autonomy, defined there as "the capacity of a system to achieve goals while operating independently from external control," is required when a wide variation in circumstances precludes responses being pre-planned; instead, autonomy follows an on-board deliberative process to determine the situation, decide the response, and manage its execution. Autonomy is increasingly called for to support adventurous space mission concepts, as an enabling capability or as a significant enhancer of the science value that those missions can return. But if autonomy is to be allowed to control these missions' expensive assets, all parties in the lifetime of a mission, from proposers through ground control, must have high confidence that autonomy will perform as intended to keep the asset safe and, if possible, accomplish the mission objectives. Mission assurance is a key contributor to providing this confidence, yet assurance practices honed over decades of spaceflight have relatively little experience with autonomy. To remedy this situation, researchers in JPL's software assurance group have been involved in the development of techniques specific to the assurance of autonomy. This paper summarizes over two decades of this research and offers a vision of where further work is needed to address open issues.


Enabling Astronaut Self-Scheduling using a Robust Advanced Modelling and Scheduling system: an assessment during a Mars analogue mission

Saint-Guillain, Michael, Vanderdonckt, Jean, Burny, Nicolas, Pletser, Vladimir, Vaquero, Tiago, Chien, Steve, Karl, Alexander, Marquez, Jessica, Karasinski, John, Wain, Cyril, Comein, Audrey, Casla, Ignacio S., Jacobs, Jean, Meert, Julien, Chamart, Cheyenne, Drouet, Sirga, Manon, Julie

arXiv.org Artificial Intelligence

Human long-duration exploration missions (LDEMs) raise a number of technological challenges. This paper addresses the question of crew autonomy: as distances increase, communication delays and constraints tend to prevent the astronauts from being monitored and supported by real-time ground control. Eventually, future planetary missions will necessarily require a form of astronaut self-scheduling. We study the usage of a computer decision-support tool by a crew of analog astronauts during a Mars simulation mission conducted at the Mars Desert Research Station (MDRS, Mars Society) in Utah. The proposed tool, called Romie, belongs to the new category of Robust Advanced Modelling and Scheduling (RAMS) systems. It allows the crew members (i) to visually model their scientific objectives and constraints, (ii) to compute near-optimal operational schedules while taking uncertainty into account, (iii) to monitor the execution of past and current activities, and (iv) to modify scientific objectives/constraints w.r.t. unforeseen events and opportunistic science. In this study, we empirically measure how the astronauts, who are novice planners, perform at using such a tool when self-scheduling under the realistic assumptions of a simulated Martian planetary habitat.
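The schedule-computation step (ii) can be hinted at with a toy deadline-ordered packer. This is an illustrative simplification with made-up activities, not Romie's robust optimisation engine:

```python
def self_schedule(activities, day_minutes=480):
    """Toy scheduling sketch: order activities by deadline and pack
    them greedily, skipping any whose deadline would be missed.

    Each activity is a (name, duration, deadline) tuple in minutes
    from the start of the crew day.
    """
    plan, t = [], 0
    for name, duration, deadline in sorted(activities,
                                           key=lambda a: a[2]):
        if t + duration <= min(deadline, day_minutes):
            plan.append((name, t, t + duration))  # (name, start, end)
            t += duration
    return plan

# Hypothetical analog-astronaut activities for one crew day.
plan = self_schedule([
    ("EVA prep", 60, 120),
    ("soil sampling", 90, 300),
    ("greenhouse check", 30, 90),
])
```

A robust scheduler like Romie additionally models uncertainty in durations and supports re-planning mid-execution, which this greedy sketch deliberately omits.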


Artificial Intelligence and Natural Language Processing and Understanding in Space: A Methodological Framework and Four ESA Case Studies

Gómez-Pérez, José Manuel, García-Silva, Andrés, Leone, Rosemarie, Albani, Mirko, Fontaine, Moritz, Poncet, Charles, Summerer, Leopold, Donati, Alessandro, Roma, Ilaria, Scaglioni, Stefano

arXiv.org Artificial Intelligence

The European Space Agency is well known as a powerful force for scientific discovery in numerous areas related to Space. The amount and depth of the knowledge produced throughout the different missions carried out by ESA, and its contribution to scientific progress, is enormous, involving large collections of documents like scientific publications, feasibility studies, technical reports, and quality management procedures, among many others. Through initiatives like the Open Space Innovation Platform, ESA also acts as a hub for new ideas coming from the wider community across different challenges, contributing to a virtuous circle of scientific discovery and innovation. Handling such a wealth of information, much of which is unstructured text, is a colossal task that goes beyond human capabilities, hence requiring automation. In this paper, we present a methodological framework based on artificial intelligence and natural language processing and understanding to automatically extract information from Space documents and generate value from it, and we illustrate this framework through several case studies implemented across different functional areas of ESA, including Mission Design, Quality Assurance, Long-Term Data Preservation, and the Open Space Innovation Platform. In doing so, we demonstrate the value of these technologies in several tasks ranging from effortlessly searching and recommending Space information to automatically determining how innovative an idea can be, answering questions about Space, and generating quizzes regarding quality procedures. Each of these accomplishments represents a step forward in the application of increasingly intelligent AI systems in Space, from structuring and facilitating information access to intelligent systems capable of understanding and reasoning with such information.
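The document-search use case can be illustrated with a generic TF-IDF ranker over a toy corpus. This is a minimal sketch of the retrieval idea, not ESA's actual pipeline:

```python
import math
from collections import Counter

def tfidf_search(query: str, docs: list[str]) -> list[tuple[int, float]]:
    """Rank documents against a query with plain TF-IDF scoring.

    Returns (doc_index, score) pairs, best match first. A generic
    illustration of keyword retrieval, not a production system.
    """
    tokenised = [d.lower().split() for d in docs]
    n = len(docs)
    # Document frequency: in how many documents each term appears.
    df = Counter(term for doc in tokenised for term in set(doc))
    scores = []
    for i, doc in enumerate(tokenised):
        tf = Counter(doc)
        score = sum(tf[t] / len(doc) * math.log(n / df[t])
                    for t in query.lower().split() if t in tf)
        scores.append((i, score))
    return sorted(scores, key=lambda x: -x[1])

# Hypothetical mini-corpus of Space document titles.
docs = ["mission design feasibility study",
        "quality assurance procedure manual",
        "mission data preservation report"]
ranked = tfidf_search("mission design", docs)
```

Modern pipelines of this kind typically replace the TF-IDF step with learned embeddings, but the ranking interface stays the same.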